Consider the following optimization problem: Given $n \times n$ matrices $A$ and $\Lambda$, maximize $\langle A, U\Lambda U^*\rangle$ where $U$ varies over the unitary group $\mathrm{U}(n)$. This problem seeks to approximate $A$ by a matrix whose spectrum is the same as $\Lambda$'s, and, by setting $\Lambda$ to be an appropriate diagonal matrix, one can recover matrix approximation problems such as PCA and rank-$k$ approximation. We study the problem of designing differentially private algorithms for this optimization problem in settings where the matrix $A$ is constructed using users' private data. We give efficient private algorithms that come with upper and lower bounds on the approximation error. Our results unify and improve upon several prior works on private matrix approximation problems. They rely on extensions of packing/covering number bounds for Grassmannians to unitary orbits, which should be of independent interest.
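For concreteness, the objective and the rank-$k$ special case can be restated as follows (using the trace inner product, the standard choice for this formulation):

```latex
\max_{U \in \mathrm{U}(n)} \; \langle A, U \Lambda U^* \rangle,
\qquad \langle A, B \rangle := \operatorname{tr}(A B^*).
```

Setting $\Lambda = \operatorname{diag}(1,\dots,1,0,\dots,0)$ with $k$ ones makes $U\Lambda U^*$ a rank-$k$ orthogonal projection, so for Hermitian $A$ the maximizer picks out the top-$k$ eigenspace, recovering PCA / rank-$k$ approximation as claimed above.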
Most prior convergence results on differentially private stochastic gradient descent (DP-SGD) are derived under the simplistic assumption of uniform Lipschitzness, i.e., that the per-sample gradients are uniformly bounded. This assumption is unrealistic in many problems, e.g., linear regression with Gaussian data. We relax uniform Lipschitzness by instead assuming that the per-sample gradients have \textit{sample-dependent} upper bounds, i.e., per-sample Lipschitz constants, which themselves may be unbounded. We derive new convergence results for DP-SGD on both convex and nonconvex functions when the per-sample Lipschitz constants have bounded moments. Furthermore, we provide principled guidance on choosing the clip norm in DP-SGD for convex settings satisfying our relaxed version of Lipschitzness, without making distributional assumptions on the Lipschitz constants. We verify the effectiveness of our recommendations via experiments on benchmark datasets.
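To make the role of the clip norm concrete, here is a minimal sketch of one DP-SGD step with per-sample clipping and Gaussian noise. The function name and signature are illustrative, not the paper's implementation; the paper's contribution concerns how to choose `clip_norm`, which this sketch treats as given.

```python
import math
import random

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD step (sketch): clip each per-sample gradient to norm
    `clip_norm`, average the clipped gradients, add Gaussian noise
    calibrated to the clip norm, then take a gradient step."""
    n = len(per_sample_grads)
    d = len(params)
    summed = [0.0] * d
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down gradients above the threshold; never scale up.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for j in range(d):
            summed[j] += g[j] * scale
    sigma = noise_multiplier * clip_norm / n  # noise scales with the clip norm
    noisy_mean = [summed[j] / n + rng.gauss(0.0, sigma) for j in range(d)]
    return [params[j] - lr * noisy_mean[j] for j in range(d)]
```

With `noise_multiplier=0` this reduces to clipped SGD, which is a convenient sanity check; the bias introduced by clipping is exactly what the relaxed, sample-dependent Lipschitz analysis has to control.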
Existing theory predicts that data heterogeneity will degrade the performance of the Federated Averaging (FedAvg) algorithm in federated learning. However, in practice, the simple FedAvg algorithm converges well. This paper explains the seemingly unreasonable effectiveness of FedAvg that contradicts previous theoretical predictions. We find that the key assumption of bounded gradient dissimilarity in previous theoretical analyses is too pessimistic to characterize data heterogeneity in practical applications. For a simple quadratic problem, we show that large gradient dissimilarity can exist without any negative impact on the convergence of FedAvg. Motivated by this observation, we propose a new quantity, average drift at optimum, to measure the effect of data heterogeneity, and explicitly use it to present a new theoretical analysis of FedAvg. We show that the average drift at optimum is nearly zero across many practical federated training tasks, whereas the gradient dissimilarity can be large. Our new analysis suggests that FedAvg can have identical convergence rates in homogeneous and heterogeneous data settings, and hence leads to a better understanding of its empirical success.
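For readers unfamiliar with the algorithm under analysis, one FedAvg round can be sketched as follows: each client starts from the global model, runs a few local gradient steps on its own data, and the server averages the resulting client models. The gradient oracles and names here are illustrative.

```python
def fedavg_round(global_model, client_grad_fns, local_steps, lr):
    """One FedAvg round (sketch): clients run `local_steps` of local
    gradient descent from the shared global model; the server then
    averages the resulting client models."""
    d = len(global_model)
    client_models = []
    for grad_fn in client_grad_fns:
        w = list(global_model)  # each client starts from the global model
        for _ in range(local_steps):
            g = grad_fn(w)
            w = [w[j] - lr * g[j] for j in range(d)]
        client_models.append(w)
    # Server step: plain average of the client models.
    return [sum(w[j] for w in client_models) / len(client_models)
            for j in range(d)]
```

The heterogeneity question above is about how far these locally-updated models drift apart before averaging; the paper's "average drift at optimum" measures that drift at the solution rather than bounding gradient dissimilarity everywhere.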
The Fokker-Planck equation (FPE) is the partial differential equation that governs the density evolution of Itô processes, and it is of great importance to the literature of statistical physics and machine learning. The FPE can be regarded as a continuity equation where the change of the density is completely determined by a time-varying velocity field. Importantly, this velocity field also depends on the current density function. As a result, the ground-truth velocity field can be shown to be the solution of a fixed-point equation, a property that we call self-consistency. In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense. The proposed potential function is amenable to neural-network based parameterization, since the stochastic gradient with respect to the parameters can be computed efficiently. Once a parameterized model, such as a neural ordinary differential equation, is trained, we can generate the entire trajectory of the FPE.
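As a hedged illustration of the self-consistency property (assuming an Itô process $dX_t = b(X_t, t)\,dt + \sigma\,dW_t$ with a constant diffusion matrix, which is one standard setting), the continuity-equation form of the FPE and its density-dependent velocity field read:

```latex
\partial_t p_t(x) = -\nabla \cdot \big( p_t(x)\, v[p_t](x, t) \big),
\qquad
v[p_t](x, t) = b(x, t) - \tfrac{1}{2}\, \sigma \sigma^{\top} \nabla \log p_t(x).
```

The true density $p_t$ is transported by the velocity field computed from $p_t$ itself; a velocity field that transports a density trajectory reproducing its own score term is exactly a fixed point of this map, which is the self-consistency property the abstract exploits.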
Federated learning (FL) enables learning from decentralized, privacy-sensitive data, with computation on raw data confined to edge clients. This paper introduces mixed FL, which incorporates an additional loss term calculated at the coordinating server (while maintaining FL's restrictions on private data). There are numerous benefits. For example, additional datacenter data can be leveraged to jointly learn from centralized (datacenter) and decentralized (federated) training data and better match an expected inference data distribution. Mixed FL also enables offloading some intensive computations (e.g., embedding regularization) to the server, greatly reducing communication and client computation load. For these and other mixed FL use cases, we present three algorithms: parallel training, 1-way gradient transfer, and 2-way gradient transfer. We state convergence bounds for each, and give intuition on which are suited to particular mixed FL problems. Finally, we perform extensive experiments on three tasks, showing that mixed FL can blend training data to achieve accuracy on an inference distribution, and can reduce communication and computation overhead by over 90%. Our experiments confirm the theoretical predictions of how the algorithms perform under different mixed FL problem settings.
We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility.
We propose and analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints. Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution ($m \ge 1$ samples), providing more stringent but more realistic protection against information leakage. We show that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning hypothesis classes with finite metric entropy, the privacy cost decreases as $O(1/\sqrt{m})$ as users provide more samples. In contrast, when increasing the number of users $n$, the privacy cost decreases at a faster $O(1/n)$ rate. We complement these results with lower bounds showing the minimax optimality of our algorithms for mean estimation and stochastic convex optimization. Our algorithms rely on novel techniques for private mean estimation in arbitrary dimension with error scaling as the concentration radius $\tau$ of the distribution rather than the entire range.
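The concentration-radius idea can be illustrated in one dimension with a deliberately simplified sketch (not the paper's algorithm): project samples onto an interval of radius $\tau$ around a coarse center, so the sensitivity of the mean scales with $\tau$ rather than the full data range, then apply the Gaussian mechanism. All names and the choice of center are hypothetical.

```python
import random

def private_mean_1d(samples, center, tau, noise_scale, rng):
    """Toy sketch: clamp each sample to [center - tau, center + tau], so
    replacing one sample moves the mean by at most 2*tau/n, then average
    and add Gaussian noise calibrated to that (smaller) sensitivity."""
    n = len(samples)
    clipped = [min(max(x, center - tau), center + tau) for x in samples]
    sensitivity = 2.0 * tau / n  # worst-case change from one sample
    return sum(clipped) / n + rng.gauss(0.0, noise_scale * sensitivity)
```

When the data really concentrate within radius $\tau$ of the center, the clamping introduces no bias and the noise is proportional to $\tau/n$, matching the scaling claimed in the abstract; the paper's contribution is achieving this in arbitrary dimension without knowing the center in advance.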
Federated Averaging (FEDAVG) has emerged as the algorithm of choice for federated learning due to its simplicity and low communication cost. However, in spite of recent research efforts, its performance is not fully understood. We obtain tight convergence rates for FEDAVG and prove that it suffers from 'client-drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence. As a solution, we propose a new algorithm (SCAFFOLD) which uses control variates (variance reduction) to correct for the 'client-drift' in its local updates. We prove that SCAFFOLD requires significantly fewer communication rounds and is not affected by data heterogeneity or client sampling. Further, we show that (for quadratics) SCAFFOLD can take advantage of similarity in the client's data yielding even faster convergence. The latter is the first result to quantify the usefulness of local-steps in distributed optimization.
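The control-variate correction at the heart of SCAFFOLD can be sketched in one line of arithmetic: each client replaces its local gradient $g_i(w)$ with $g_i(w) - c_i + c$, where $c_i$ and $c$ are the client and server control variates. The sketch below shows only that corrected local step; how the control variates are maintained is part of the full algorithm and is omitted here.

```python
def scaffold_local_step(w, grad_fn, c_local, c_global, lr):
    """One SCAFFOLD local step (sketch): correct the client's gradient
    with control variates, g_i(w) - c_i + c, so the local update
    direction tracks the global gradient and client drift cancels."""
    g = grad_fn(w)
    return [w[j] - lr * (g[j] - c_local[j] + c_global[j])
            for j in range(len(w))]
```

Intuitively, if $c_i$ estimates the client's own gradient and $c$ estimates the global one, the correction removes the client-specific bias that causes the drift described above.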
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of the ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and we propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
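The "long-term memory" fix (the AMSGrad variant) amounts to scaling updates by the running maximum of the second-moment estimates instead of the exponential moving average itself, so large past gradients are never forgotten. A minimal single-step sketch, with illustrative state handling:

```python
import math

def amsgrad_step(params, grads, state, lr=0.001,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad step (sketch): identical to Adam except the update is
    scaled by v_hat, the running MAXIMUM of the second-moment estimates,
    which gives the algorithm long-term memory of large past gradients."""
    m, v, v_hat = state
    new_m, new_v, new_vh, new_p = [], [], [], []
    for p, g, mi, vi, vh in zip(params, grads, m, v, v_hat):
        mi = beta1 * mi + (1 - beta1) * g          # first moment (as in Adam)
        vi = beta2 * vi + (1 - beta2) * g * g      # second moment (as in Adam)
        vh = max(vh, vi)                           # the key change vs. Adam
        new_m.append(mi); new_v.append(vi); new_vh.append(vh)
        new_p.append(p - lr * mi / (math.sqrt(vh) + eps))
    return new_p, (new_m, new_v, new_vh)
```

Because `v_hat` is non-decreasing, the effective per-coordinate step size can only shrink over time, which is what rules out the non-convergence example described above.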
Generative models have been widely studied in computer vision. Recently, diffusion models have drawn substantial attention due to the high quality of their generated images. A key desired property of image generative models is the ability to disentangle different attributes, which should enable modification towards a style without changing the semantic content, and the modification parameters should generalize to different images. Previous studies have found that generative adversarial networks (GANs) are inherently endowed with such disentanglement capability, so they can perform disentangled image editing without re-training or fine-tuning the network. In this work, we explore whether diffusion models are also inherently equipped with such a capability. Our finding is that for stable diffusion models, by partially changing the input text embedding from a neutral description (e.g., "a photo of person") to one with style (e.g., "a photo of person with smile") while fixing all the Gaussian random noises introduced during the denoising process, the generated images can be modified towards the target style without changing the semantic content. Based on this finding, we further propose a simple, light-weight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation. This entire process only involves optimizing over around 50 parameters and does not fine-tune the diffusion model itself. Experiments show that the proposed method can modify a wide range of attributes, with the performance outperforming diffusion-model-based image-editing algorithms that require fine-tuning. The optimized weights generalize well to different images. Our code is publicly available at https://github.com/UCSB-NLP-Chang/DiffusionDisentanglement.
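The core editing knob described above is a weighted blend of the neutral and styled text embeddings, with the same Gaussian noises reused across the denoising pass. The following sketch shows only that blending step, with hypothetical names and no actual diffusion model; in the paper the weights (roughly one per token position, around 50 parameters total) are the only quantities optimized.

```python
def mix_embeddings(neutral, style, weights):
    """Sketch of the editing knob: blend a neutral text embedding with a
    styled one via per-position mixing weights in [0, 1]. weight 0 keeps
    the neutral embedding; weight 1 takes the styled one."""
    assert len(neutral) == len(style) == len(weights)
    return [(1 - a) * n + a * s
            for n, s, a in zip(neutral, style, weights)]
```

Optimizing these few weights for style matching and content preservation, while freezing the diffusion model and the sampled noises, is what makes the method light-weight compared to fine-tuning approaches.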